Author

Isabella McCarty

Review Application Programming Interfaces (APIs) and JavaScript Object Notation (JSON) format

Understanding how APIs work is crucial for those learning about AI, as APIs provide the standardized mechanisms for accessing and integrating AI models and services into applications. Being able to effectively interact with APIs enables learners to explore different AI offerings, leverage AI frameworks and libraries, and incorporate AI capabilities into a wide range of projects, which is an increasingly valuable skill as AI becomes more pervasive across various domains.

Additionally, familiarity with JSON (JavaScript Object Notation) is essential for students learning about AI, as this lightweight data format is widely used for request/response payloads in AI APIs, configuration files in AI frameworks, and the representation of input/output data for AI models. Understanding the structure and syntax of JSON allows students to seamlessly interact with AI-related tools and services, parse and manipulate the data used in AI applications, and facilitate the integration of AI components into larger, distributed systems - all of which are crucial skills for AI practitioners working in real-world, web-based environments.

Technically, here we are using a REST API. The use of RESTful (Representational State Transfer) APIs is highly relevant for students learning about AI, as REST has become the de facto standard for modern web services and APIs, including those used in AI-powered applications. REST APIs follow standardized principles and conventions, such as the use of HTTP methods for CRUD operations on resources, a resource-oriented design that aligns well with data-centric AI tasks, and the use of flexible data formats like JSON. All of these enable AI practitioners to easily integrate AI capabilities into their applications by leveraging a vast ecosystem of data sources, services, and tools through well-documented, stateless, and scalable API interfaces.

Table of Contents

1. GET()
2. JSON
3. Use a Real API
4. POST()
5. API Access Bedrock
6. Assignment

GET()

The requests package is a popular library in Python for making HTTP requests to web servers. The get() and post() methods are two of the most commonly used functions provided by the requests library.

The get() method is used to send an HTTP GET request to a specified URL. GET requests are typically used to retrieve data from a server, such as fetching a web page or an API response. The get() method returns a Response object, which contains the server’s response, including the status code, headers, and the content of the response.

requests.get(url, params={}, **kwargs):
This method sends an HTTP GET request to the specified URL. A GET request is used to retrieve data from a server. The params argument is an optional dictionary of query string arguments. The server responds by sending back the requested data. GET requests should only retrieve data and should have no other effect.

Code
import requests
import json

# example
url = 'https://api.github.com/users/darren-kraker'
response = requests.get(url)
# Parse the JSON body of the API response
json_response = response.json()
# See all keys and values of the json
json_response
{'login': 'darren-kraker',
 'id': 48769318,
 'node_id': 'MDQ6VXNlcjQ4NzY5MzE4',
 'avatar_url': 'https://avatars.githubusercontent.com/u/48769318?v=4',
 'gravatar_id': '',
 'url': 'https://api.github.com/users/darren-kraker',
 'html_url': 'https://github.com/darren-kraker',
 'followers_url': 'https://api.github.com/users/darren-kraker/followers',
 'following_url': 'https://api.github.com/users/darren-kraker/following{/other_user}',
 'gists_url': 'https://api.github.com/users/darren-kraker/gists{/gist_id}',
 'starred_url': 'https://api.github.com/users/darren-kraker/starred{/owner}{/repo}',
 'subscriptions_url': 'https://api.github.com/users/darren-kraker/subscriptions',
 'organizations_url': 'https://api.github.com/users/darren-kraker/orgs',
 'repos_url': 'https://api.github.com/users/darren-kraker/repos',
 'events_url': 'https://api.github.com/users/darren-kraker/events{/privacy}',
 'received_events_url': 'https://api.github.com/users/darren-kraker/received_events',
 'type': 'User',
 'user_view_type': 'public',
 'site_admin': False,
 'name': None,
 'company': None,
 'blog': '',
 'location': None,
 'email': None,
 'hireable': None,
 'bio': None,
 'twitter_username': None,
 'public_repos': 0,
 'public_gists': 0,
 'followers': 1,
 'following': 0,
 'created_at': '2019-03-20T18:01:30Z',
 'updated_at': '2025-04-05T21:09:25Z'}
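The params dictionary mentioned above is URL-encoded into a query string for you. Here is a quick offline sketch of what requests does under the hood (the search endpoint and parameters are just an illustration):

```python
from urllib.parse import urlencode

# requests.get(url, params=...) appends an encoded query string like this:
base = 'https://api.github.com/search/users'
params = {'q': 'location:California', 'per_page': 5}
full_url = f"{base}?{urlencode(params)}"

# Special characters (like the ':') are percent-escaped automatically
print(full_url)
```

Letting requests (or urlencode) build the query string avoids subtle bugs with spaces and special characters in hand-built URLs.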

JSON

JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy for humans to read and write, and easy for machines to parse and generate. It is often used for transmitting data between a server and web application or between different applications. In the context of Python programming and using Application Programming Interfaces (APIs), JSON plays a crucial role.

When working with APIs, data is typically exchanged in a structured format, such as JSON or XML. JSON has become the preferred format due to its simplicity and the fact that it is natively supported in most programming languages, including Python.
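Python's built-in json module handles this translation in both directions. A small offline example (the profile dict below is made up to resemble the GitHub response):

```python
import json

# A made-up profile resembling the GitHub response above
profile = {'login': 'octocat', 'public_repos': 8, 'site_admin': False, 'bio': None}

text = json.dumps(profile)   # dict -> JSON string ("dump string")
back = json.loads(text)      # JSON string -> dict ("load string")

# Note how Python and JSON spell things differently:
# False -> false, None -> null, single quotes -> double quotes
print(text)
```

The round trip is lossless for basic types, which is why a dict is the natural Python representation of a JSON payload.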

Code
# In Python, a parsed JSON object becomes a dictionary
type(json_response)
dict
Code
# We can look at the dict keys
for k in json_response.keys():
    print(k)
login
id
node_id
avatar_url
gravatar_id
url
html_url
followers_url
following_url
gists_url
starred_url
subscriptions_url
organizations_url
repos_url
events_url
received_events_url
type
user_view_type
site_admin
name
company
blog
location
email
hireable
bio
twitter_username
public_repos
public_gists
followers
following
created_at
updated_at
Code
# We can look at just the values
for v in json_response.values():
    print(v)
darren-kraker
48769318
MDQ6VXNlcjQ4NzY5MzE4
https://avatars.githubusercontent.com/u/48769318?v=4

https://api.github.com/users/darren-kraker
https://github.com/darren-kraker
https://api.github.com/users/darren-kraker/followers
https://api.github.com/users/darren-kraker/following{/other_user}
https://api.github.com/users/darren-kraker/gists{/gist_id}
https://api.github.com/users/darren-kraker/starred{/owner}{/repo}
https://api.github.com/users/darren-kraker/subscriptions
https://api.github.com/users/darren-kraker/orgs
https://api.github.com/users/darren-kraker/repos
https://api.github.com/users/darren-kraker/events{/privacy}
https://api.github.com/users/darren-kraker/received_events
User
public
False
None
None

None
None
None
None
None
0
0
1
0
2019-03-20T18:01:30Z
2025-04-05T21:09:25Z
Code
# If we know the key, we can look up its value with ['key'] indexing
json_response['node_id']
'MDQ6VXNlcjQ4NzY5MzE4'
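Indexing with ['key'] raises a KeyError if the key is missing; dict.get() returns a default instead, which is handy when an API response may omit fields. A small offline sketch using a trimmed-down version of the response above:

```python
# A trimmed-down version of the GitHub response
json_response = {'login': 'darren-kraker', 'bio': None}

# ['key'] works only when the key exists
print(json_response['login'])

# .get() returns None (or a chosen default) for missing keys instead of crashing
print(json_response.get('followers'))          # None
print(json_response.get('followers', 0))       # 0
```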

POST()

The post() method, on the other hand, is used to send an HTTP POST request to a specified URL. POST requests are typically used to send data to a server, such as submitting a form or sending data to an API endpoint. The post() method also returns a Response object, which contains the server’s response to the POST request. In this class, we will mostly use POST().

requests.post(url, data={}, json={}, **kwargs):
This method sends an HTTP POST request to the specified URL. A POST request is used to send data to a server to create a new resource. The data or json argument is a dictionary of data that you want to send to the server. The server responds by sending back data, often the details of the created resource.

Code
# Example of POST()

# Define the URL
url = 'https://httpbin.org/post'

# Define the headers
headers = {
    # If an API key is needed, put it here in the headers
    # 'x-api-key': api_key,
    'Content-Type': 'application/json',  # Inform the server about the type of the content
    'Accept': 'application/json'  # Tell the server what we are expecting in the response
}

# Define the data you want to send
data = {
    'key1': 'value1',
    'key2': 'value2'
}

# Convert the data to JSON format
data_json = json.dumps(data) # Convert the dictionary to a json string 'dump string'
print('Here is the data I am sending...:', data_json)

# Send the POST request
response = requests.post(url, headers=headers, data=data_json)

# Handle the response
if response.status_code == 200: # All went well
    # Take action here
    print('The API call was successful.')
else:
    print(f"Error: {response.status_code} - {response.text}")
# I usually don't print the response in this cell. This cell is only for making the API call
Here is the data I am sending...: {"key1": "value1", "key2": "value2"}
The API call was successful.
Code
# Iterate through the response JSON object and print the keys & values
response_json = response.json()
for key in response_json:
    print(key, ':', response_json[key])
args : {}
data : {"key1": "value1", "key2": "value2"}
files : {}
form : {}
headers : {'Accept': 'application/json', 'Accept-Encoding': 'gzip, deflate', 'Content-Length': '36', 'Content-Type': 'application/json', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.32.3', 'X-Amzn-Trace-Id': 'Root=1-67fc3e0f-5a10ceea3331285239b21439'}
json : {'key1': 'value1', 'key2': 'value2'}
origin : 47.7.49.2
url : https://httpbin.org/post
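As a side note, requests can do the json.dumps step for you: passing json=data (instead of data=json.dumps(data)) serializes the dict and sets the Content-Type header to application/json automatically. The serialization it performs is the same round trip shown here:

```python
import json

data = {'key1': 'value1', 'key2': 'value2'}

# What requests.post(url, json=data) would send on the wire
body = json.dumps(data)

# The server can parse it straight back into the same structure
print(body)
assert json.loads(body) == data
```

Either style works; doing the dumps yourself, as above, just makes the serialization step explicit.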

Storing API Keys

To store your API key for use within a notebook or other source code:

- Get the access keys and secret from your email or other API vendor
- Open a terminal from the Jupyter Launcher
- Use the nano text editor (or any other editor)
- Create a .env file, that is, a file with the exact name ".env" (files that start with '.' are hidden by default)
- Add lines that look like this:
  - AWS_ACCESS_KEY_ID="YOUR_KEY"
  - AWS_SECRET_ACCESS_KEY="YOUR_SECRET"
- Insert your key and secret in double quotes
- Save the file and exit the text editor. In nano: Save (Ctrl-O), Exit (Ctrl-X)

Next, we will load these keys into this session.

Code
# If you want to use an API key or your access key, load the following.
# This package may not be installed in our SageMaker image.
# Every time you restart this JupyterLab, you will have to reinstall it.
#%pip install python-dotenv -q
# Now import the objects we need
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
# Now you can access the environment variables
# (names are case-sensitive and must match the .env file)
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')

# You can print the key to make sure it is there, but I get nervous when I see a key printed somewhere.... Someone could steal it!
#print(AWS_ACCESS_KEY_ID)
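Under the hood, load_dotenv does little more than read each KEY=VALUE line from the file and put it into the process environment. A simplified sketch of that idea, using a made-up variable name so nothing real gets overwritten (real .env files also support comments, export prefixes, and more quoting rules):

```python
import os

# One line as it might appear in a .env file (placeholder value)
line = 'EXAMPLE_API_KEY="YOUR_KEY"'

# Split on the first '=' and strip the surrounding quotes
key, _, value = line.partition('=')
os.environ[key] = value.strip('"')

# Now os.getenv() can find it, just like after load_dotenv()
print(os.getenv('EXAMPLE_API_KEY'))
```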

Use Bedrock API

Amazon Bedrock simplifies the process of building and scaling generative AI applications by providing access to high-performing foundation models (FMs) from leading AI companies through a single API.

Amazon Bedrock currently supports 9 different model providers. To see an updated list of providers or to use a particular foundation model with the Amazon Bedrock API, you'll need its model ID. For a list of providers and model IDs, see [Amazon Bedrock model IDs](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).

This function establishes a connection with AWS through your notebook to talk directly to Bedrock. If you want to run this code in your VS Code environment you would need to add your IAM Access Key and Secret as shown below:


AWS_ACCESS_KEY_ID="YOUR_KEY"
AWS_SECRET_ACCESS_KEY="YOUR_SECRET"
bedrock_runtime = get_bedrock_client(True, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

Code
# Python Built-Ins:
import os
from typing import Optional
import sys
import json
import time

# External Dependencies:
import boto3
from botocore.config import Config
import botocore

def get_bedrock_client(
    runtime: Optional[bool] = True,
    aws_access_key_id: Optional[str] = None,
    aws_secret_access_key: Optional[str] = None,
    aws_session_token: Optional[str] = None
):
    if runtime:
        service_name = 'bedrock-runtime'
    else:
        service_name = 'bedrock'

    bedrock_runtime = boto3.client(
        service_name=service_name,
        region_name="us-west-2",  # Change to your preferred region
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
        aws_session_token=aws_session_token  # Optional
    )

    print("boto3 Bedrock client successfully created!")
    print(bedrock_runtime._endpoint)
    return bedrock_runtime

Now let’s call our function and create an object that we can use to interact with Bedrock.

Code
# Run this if you are running in VS Code
bedrock_runtime = get_bedrock_client(True,AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

# bedrock_runtime = get_bedrock_client()
boto3 Bedrock client successfully created!
bedrock-runtime(https://bedrock-runtime.us-west-2.amazonaws.com)

This function gives us an abstract way to make our call to an LLM. We will parameterize all the elements that change from model to model.

Code
def invoke_model(body, model_id, accept, content_type):
    """
    Invokes Amazon bedrock model to run an inference
    using the input provided in the request body.
    
    Args:
        body (dict): The invokation body to send to bedrock
        model_id (str): the model to query
        accept (str): input accept type
        content_type (str): content type
    Returns:
        Inference response from the model.
    """

    try:
        start_time = time.time()
        response = bedrock_runtime.invoke_model(
            body=json.dumps(body), 
            modelId=model_id, 
            accept=accept, 
            contentType=content_type
        )
        elapsed_time = time.time() - start_time
        print(f"Model invocation took {elapsed_time:.3f} seconds.")

        return response

    except Exception as e:
        print(f"Couldn't invoke {model_id}")
        raise e

Now that we understand how to use an API, let’s call some models using the Bedrock API!

Amazon Nova

Code
# If you'd like to try your own prompt, edit this parameter!
prompt_data = """Command: Write a paragraph why Generative AI is an important technology to understand.
"""

# Define one or more messages using the "user" and "assistant" roles.
message_list = [{"role": "user", "content": [{"text": prompt_data}]}]

# Configure the inference parameters.
inf_params = {"max_new_tokens": 250, "top_p": 0.9, "top_k": 20, "temperature": 0.7}

body = {
    "schemaVersion": "messages-v1",
    "messages": message_list,
    "inferenceConfig": inf_params,
}
# Note: the 'us.' prefix enables cross-region inference
modelId = "us.amazon.nova-lite-v1:0"
accept = "application/json"
contentType = "application/json"

response = invoke_model(body, modelId, accept, contentType)
response_body = json.loads(response.get("body").read())

print(response_body.get("output").get("message").get("content")[0].get("text"))
Model invocation took 1.624 seconds.
Generative AI represents a pivotal advancement in the field of artificial intelligence, offering transformative potential across various sectors. Its ability to create content—ranging from text and images to music and code—mirrors human creativity, enabling more efficient and innovative solutions. Understanding Generative AI is crucial because it not only enhances productivity and creativity but also poses significant ethical considerations and challenges in areas such as authenticity and bias. As this technology continues to evolve, its implications for the future of work, education, and digital interaction are profound, making it essential for professionals, educators, and policymakers to grasp its capabilities and limitations. Mastery of Generative AI can empower individuals and organizations to harness its potential responsibly, driving forward progress in a rapidly changing digital landscape.
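Chaining .get() calls the way the print statement above does will fail with an AttributeError if any intermediate key is missing (because .get() returns None, which has no .get method). A small hypothetical helper makes the walk safer; the sample dict below just mimics the shape of the Nova response:

```python
def dig(obj, *keys, default=None):
    """Walk nested dicts, returning default if any key is missing."""
    for key in keys:
        if not isinstance(obj, dict) or key not in obj:
            return default
        obj = obj[key]
    return obj

# Shaped like the Nova response above, with made-up text
sample = {'output': {'message': {'content': [{'text': 'hello'}]}}}

content = dig(sample, 'output', 'message', 'content')
print(content[0]['text'])                          # hello
print(dig(sample, 'output', 'usage', default={}))  # {} instead of a crash
```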

Anthropic Claude

Code
# If you'd like to try your own prompt, edit this parameter!
prompt_data = """Human: Command: Write a paragraph why Generative AI is an important technology to understand.
Assistant:
"""

messages = [{"role": "user", "content": prompt_data}]

body={
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 250,
        "messages": messages
    }

modelId = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # change this to use a different version from the model provider
accept = "application/json"
contentType = "application/json"

response = invoke_model(body, modelId, accept, contentType)
response_body = json.loads(response.get("body").read())

print(response_body.get("content")[0].get("text"))
Model invocation took 4.836 seconds.
Generative AI is a groundbreaking technology that is rapidly transforming various industries and aspects of our daily lives. Understanding this technology is crucial because it has the potential to revolutionize how we create content, solve complex problems, and interact with machines. Generative AI systems, such as large language models and image generators, can produce human-like text, create realistic images, and even compose music. This capability opens up new possibilities for automation, creativity, and innovation across fields like healthcare, education, entertainment, and scientific research. Moreover, as generative AI becomes more integrated into our society, it raises important ethical and societal questions about authenticity, privacy, and the future of work. By grasping the fundamentals of generative AI, individuals can better navigate these changes, make informed decisions about its use, and potentially contribute to its responsible development and application.

Meta Llama

Code
# If you'd like to try your own prompt, edit this parameter!
prompt_data = """
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Write a paragraph why Generative AI is an important technology to understand."""

body = {
    "prompt": prompt_data,
    "temperature": 0.5,
    "top_p": 0.9,
    "max_gen_len": 512,
}

modelId = "meta.llama3-8b-instruct-v1:0"
accept = "application/json"
contentType = "application/json"

response = invoke_model(body, modelId, accept, contentType)
response_body = json.loads(response.get("body").read())

print(response_body["generation"])
Model invocation took 1.688 seconds.
 

Generative AI is an important technology to understand because it has the potential to revolutionize many industries and aspects of our lives. With the ability to generate human-like text, images, and music, Generative AI can be used to create new forms of entertainment, improve customer service, and even help with tasks such as writing and designing. Additionally, Generative AI can be used to generate new ideas and solutions to complex problems, making it a valuable tool for businesses and researchers. Furthermore, Generative AI can also be used to generate personalized content, such as recommendations and advertisements, making it a powerful technology for marketing and advertising. Overall, understanding Generative AI and its capabilities is important for anyone looking to stay ahead of the curve and take advantage of the many benefits it has to offer.

Stability Stable Diffusion XL

Code
# If you'd like to try your own prompt, edit this parameter!
prompt_data = "an ice cream fighting a popscicle"

body = {
    "text_prompts": [{"text": prompt_data}],
    "cfg_scale": 10,
    "seed": 20,
    "steps": 50
}
modelId = "stability.stable-diffusion-xl-v1"
accept = "application/json"
contentType = "application/json"


response = invoke_model(body, modelId, accept, contentType)
response_body = json.loads(response.get("body").read())

print(response_body["result"])
print(f'{response_body.get("artifacts")[0].get("base64")[0:80]}...')
Model invocation took 7.877 seconds.
success
iVBORw0KGgoAAAANSUhEUgAABAAAAAQACAIAAADwf7zUAAABimVYSWZNTQAqAAAACAAGAQAABAAAAAEA...

The output is a base64 encoded string of the image data. You can use any image processing library (such as Pillow) to decode the image as in the example below:

Code
import base64
import io
from PIL import Image

base_64_img_str = response_body.get("artifacts")[0].get("base64")
image = Image.open(io.BytesIO(base64.decodebytes(bytes(base_64_img_str, "utf-8"))))
image
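Why does the encoded string above begin with iVBOR? Every PNG file starts with the same eight signature bytes, and Base64-encoding those bytes always yields that prefix. A quick round-trip check:

```python
import base64

png_signature = b'\x89PNG\r\n\x1a\n'   # the first 8 bytes of every PNG file

encoded = base64.b64encode(png_signature).decode('ascii')
decoded = base64.b64decode(encoded)

print(encoded)                    # iVBORw0KGgo=
assert decoded == png_signature   # encoding and decoding are lossless
```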

Okay, now let’s explore how we can pass parameters to a prompt to get dynamic output.

Code
variable_used_in_prompt = "red"

prompt_data = f"Human: Give me a few fruits that are the color {variable_used_in_prompt}. Assistant:"

messages = [{"role": "user", "content": prompt_data}]

body={
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": messages
    }

modelId = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # change this to use a different version from the model provider
accept = "application/json"
contentType = "application/json"

response = invoke_model(body, modelId, accept, contentType)
response_body = json.loads(response.get("body").read())

print(response_body.get("content")[0].get("text"))
Model invocation took 2.882 seconds.
Here are several fruits that are typically red in color:

1. Strawberries
2. Raspberries
3. Cherries
4. Red apples (e.g., Red Delicious, Gala)
5. Watermelon (the flesh)
6. Red grapes
7. Pomegranate
8. Red plums
9. Red grapefruit
10. Red pears (e.g., Red Anjou)

These fruits can vary in shade from bright red to deep crimson, depending on the specific variety and ripeness.
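The same templating idea scales to any number of inputs: wrap the f-string in a small helper function and you can generate a fresh prompt per value (the helper name here is just for illustration):

```python
def make_fruit_prompt(color):
    """Build the Claude prompt for a given color."""
    return f"Human: Give me a few fruits that are the color {color}. Assistant:"

# One prompt per color, each ready to drop into the messages body above
prompts = [make_fruit_prompt(c) for c in ('red', 'yellow', 'purple')]
for p in prompts:
    print(p)
```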

Assignment

Use an API to gather data and pass that to an LLM to get a dynamic response.
Some possible API sources:
https://www.api-ninjas.com/api/
https://zenquotes.io/api/quotes/

Bonus: Try to extend your code to use models that fall outside Bedrock like OpenAI.
Note: LLMs external to Bedrock may require paying for an API key

Code
import json
import requests
import boto3

# Set my API key
api_key = os.getenv('aws_access_key_id')

# Get a list of parks in California 
url = "https://developer.nps.gov/api/v1/parks"
params = {
    "stateCode": "CA",
    "limit": 10
}
headers = {
    "Accept": "application/json",
    "X-Api-Key": api_key
}

response = requests.get(url, headers=headers, params=params)

# Sanity Check (as seen above)
if response.status_code == 200:
    print('The API call was successful.')
    parks_data = response.json()
    park_names = [park["fullName"] for park in parks_data["data"]]
    print("Parks in CA:", park_names)
else:
    print("Error fetching parks:", response.status_code)
    
# Build prompt for Claude
joined_names = ", ".join(park_names)
prompt = f"""Human: I’m planning a trip to California and found these national parks: {joined_names}.
Can you suggest which ones to prioritize and explain what makes each one unique?
Assistant:"""

# Claude prompt structure (as seen above)
messages = [{"role": "user", "content": prompt}]
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 500,
    "messages": messages
}
body_str = json.dumps(body)

# Call Claude with AWS Bedrock 
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")
modelId = "anthropic.claude-3-5-sonnet-20240620-v1:0"

bedrock_response = bedrock.invoke_model(
    body=body_str,
    modelId=modelId,
    accept="application/json",
    contentType="application/json"
)

# Parse Claude's response
bedrock_body = json.loads(bedrock_response["body"].read())
try:
    print("\nClaude's Response:\n", bedrock_body["content"][0]["text"])
except Exception as e:
    print("Error reading Claude response:", e)
Parks in CA: ['Alcatraz Island', 'Butterfield Overland National Historic Trail', 'Cabrillo National Monument', 'California National Historic Trail', 'Castle Mountains National Monument', 'Channel Islands National Park', 'César E. Chávez National Monument', 'Death Valley National Park', 'Devils Postpile National Monument', "Eugene O'Neill National Historic Site"]

Claude's Response:
 Certainly! Here's a prioritized list of some of the most unique and noteworthy parks from your selection, along with explanations of what makes each special:

1. Death Valley National Park:
   - Unique features: Lowest point in North America, extreme heat, diverse landscapes
   - Priority: High - offers vast desert scenery, salt flats, sand dunes, and unique geological formations

2. Channel Islands National Park:
   - Unique features: Isolated island ecosystems, diverse wildlife, marine life
   - Priority: High - great for hiking, kayaking, and observing rare species

3. Alcatraz Island:
   - Unique features: Former high-security prison, rich history
   - Priority: High - iconic San Francisco landmark with fascinating tours

4. Devils Postpile National Monument:
   - Unique features: Unusual hexagonal basalt columns, Rainbow Falls
   - Priority: Medium - geological wonder with scenic hiking trails

5. César E. Chávez National Monument:
   - Unique features: Honors the civil rights leader and labor movement
   - Priority: Medium - important historical site for those interested in social justice

6. Cabrillo National Monument:
   - Unique features: Commemorates the landing of Juan Rodriguez Cabrillo, tide pools, whale watching
   - Priority: Medium - offers beautiful views of San Diego and marine life

7. Castle Mountains National Monument:
   - Unique features: Desert landscape, Native American archaeological sites
   - Priority: Lower - remote and less developed, but good for solitude seekers

8. Eugene O'Neill National Historic Site:
   - Unique features: Home of the famous playwright
   - Priority: Lower - interesting for literature enthusiasts but more niche appeal

The California National Historic Trail and Butterfield Overland National Historic Trail are long-distance routes rather than specific sites, so they're harder to prioritize for a single trip.

Your choices should depend on your interests (history, nature, geology) and the regions of California you plan to visit. The top three offer diverse experiences and are highly recommended if they fit your itinerary.

This code pulls real-time data from the National Park Service API to get a list of national parks in California, then extracts the park names and dynamically crafts a prompt asking Anthropic Claude 3.5 Sonnet to recommend which parks to prioritize and explain why. After formatting the input in the required message structure, it sends the request to the Claude model and prints the LLM’s customized response based on the live API data. Claude’s output is thoughtful and organized: it prioritizes the national parks retrieved from the NPS API, highlights the most unique features of each, and explains why certain parks might be more worthwhile to visit.